The Principle of Incomplete Knowledge
The model embodied in a control system is necessarily incomplete
This principle can be deduced from several other, more specific principles: Heisenberg's uncertainty principle, which implies that the information a control system can gather is necessarily incomplete; the relativistic principle of the finiteness of the speed of light, which implies that information is already obsolete to some extent by the moment it arrives; the principle of bounded rationality (Simon, 1957), which states that a decision-maker in a real-world situation will never have all the information necessary for making an optimal decision; and the principle of the partiality of self-reference (Löfgren, 1990), a generalization of Gödel's incompleteness theorem, which implies that a system cannot represent itself completely, and hence cannot have complete knowledge of how its own actions may feed back into the perturbations. As a more general argument, one may note that a model must be simpler than the phenomenon it is supposed to model; otherwise, variation and selection processes would take as much time in the model as in the real world, no anticipation would be possible, and control would be precluded. Finally, models are constructed by blind-variation processes, and hence cannot be expected to reach any form of complete representation of an infinitely complex environment.
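The argument that a model must be simpler than the phenomenon it models has a simple combinatorial core: if the model has fewer distinguishable internal states than its environment, then by the pigeonhole principle some distinct environment states must be mapped onto the same model state, so the control system cannot tell them apart. A minimal illustrative sketch (not from the source; all names are hypothetical):

```python
# Illustrative sketch of the pigeonhole argument: a model with fewer internal
# states than its environment necessarily conflates some environment states.

def build_model_map(env_states, model_states):
    """Assign each environment state some model state (any fixed encoding will do)."""
    return {e: model_states[i % len(model_states)] for i, e in enumerate(env_states)}

env_states = ["e1", "e2", "e3", "e4", "e5"]   # environment: 5 distinguishable states
model_states = ["m1", "m2", "m3"]             # model: only 3 internal states

encoding = build_model_map(env_states, model_states)

# Group environment states by the model state that represents them.
groups = {}
for e, m in encoding.items():
    groups.setdefault(m, []).append(e)

# At least one model state must stand for two or more environment states,
# so the model's knowledge of the environment is necessarily incomplete.
conflated = [es for es in groups.values() if len(es) > 1]
assert conflated, "fewer model states than environment states forces conflation"
print(conflated)
```

Whatever encoding is chosen, the conflation cannot be avoided: it follows from the state counts alone, which is why the incompleteness is a matter of principle rather than of any particular model's construction.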
Copyright © 1993 Principia Cybernetica